Running Head: Agent-based Modeling for Psychology

Piaget? Vygotsky? I'm Game!: Agent-based Modeling for Psychology Research
Author
Abstract
We discuss agent-based models (ABM) as research tools for developmental and social psychology. "Agents" are computer-based entities, e.g., "people." The modeler assigns the agents real-world roles and rules, conducts simulation experiments in which the agents follow their rules, and observes real-time data. Agent-based models have some properties that can be very useful to psychology. They are more dynamic and more expressive than diagrammatic models, and simulations afford immediate feedback on the validity of the models. Agent-based modeling is useful for understanding complex phenomena, e.g., the dynamics of multiple individual learners interacting with their peers and with artifacts in their environment, and the emergent group patterns arising over time from these multiple interactions. We demonstrate an agent-based simulation that we designed as a "thought experiment" that can shed light on the ongoing debate between two theories of learning, constructivism and social constructivism. The process of building the simulation and embodying these learning theories in explicit rules is an example of how agent-based modeling may help researchers hone their theories. When "running" the models, unexpected consequences arise, and this leads to successive refinement of theory.

Piaget? Vygotsky? I'm Game!: Agent-Based Modeling for Psychology Research

"[T]he social fact is for us a fact to be explained, not to be invoked as an extra-psychological factor." (Piaget, 1962/1951, p. 4)

"The development of thought is, to Piaget, a story of the gradual socialization of deeply intimate, personal, autistic mental states. Even social speech is represented as following, not preceding, egocentric speech." (Vygotsky, 1962/1934, p. 18)

In this paper, we examine a quasi-experimental methodology, agent-based modeling, that may constitute a "common denominator," or lingua franca, for carefully articulating and comparing personal and interpersonal factors contributing to learning. Whereas we focus on the case of learning, this type of experimentation, we argue, can illuminate many issues facing developmental and social psychologists as well as education researchers.

Background

Recently, at an international conference, an author was criticized for espousing both constructivism and social constructivism as theoretical frameworks for conducting research on student learning, as though these twain shall never meet, let alone collaborate. Despite efforts to reconcile these schools of thought (e.g., Tudge & Rogoff, 1989; Cole & Wertsch, 2002; Fuson & Abrahamson, 2005), by and large they are still considered rival rather than complementary. The rift between these schools is most evident in the ongoing efforts of Learning Sciences scholars to create new paradigms that encompass both. For example, Greeno (2005) explains that his situative perspective "...builds on and synthesizes two large research programs in the study of human behavior....cognitive science [that] focuses on individuals, although it occasionally considers social interactions as context of individual cognition and learning....[and] interactional studies [that] focuses on performance of groups of individuals engaged in joint action with material and informational systems in their environment....[Yet] neither [of these research programs], by itself, can explain learning." Why is this dichotomy still prevalent?
It could be that the lingering rivalry between cognitive and interactional perspectives on learning is a vestige of college and graduate-school introductory courses that neatly compartmentalize the literature into juxtaposed schools of thought—a juxtaposition that traces the historical evolution of research on learning by foregrounding distinctions between schools but does not provide tools for examining the complementarity of these paradigms. Also, some scholars may inadvertently disseminate and thus perpetuate an apparent schism in the field of research on learning, because they are still entrenched in the paradigms they themselves were schooled in (Kuhn, 1962).

So we are faced with what appears to be an intellectual impasse, an impasse that perpetuates a historical schism that may have been ill-founded to begin with. It appears as though we lack the methodological wherewithal to ponder this juxtaposition in ways that we would be comfortable endorsing as experimentally rigorous, theoretically coherent, and rhetorically compelling. We submit that holding on to such a position—that one of the schools is essentially more revealing of patterns and mechanisms of learning—limits the scope of effective research and educational design, especially design research for classroom learning. It could be that the two "camps" do not interact because they lack a common "campground." That is, fruitful discourse between Piagetians and Vygotskiians is constrained because they operate in parallel paradigms, with separate perspectives, methodologies, and terminologies. In the remainder of the introduction, we discuss agent-based modeling as a support to research.
Agent-Based Modeling (ABM)

This paper proposes the viability of agent-based modeling (ABM) environments (e.g., 'NetLogo,' Wilensky, 1999a; 'Swarm,' Langton & Burkhardt, 1997; 'Repast,' Collier & Sallach, 2001) as research tools for articulating and examining hypotheses (Wilensky, 2001; Wilensky & Reisman, 2004) in the domains of developmental and social psychology, such as in the research of learning. ABM of human behavior is a growing research practice that has shed light on complex dynamic phenomena (e.g., Kauffman, 1995; Holland, 1995) such as residential segregation (Schelling, 1971), wealth distribution (Epstein & Axtell, 1996), negotiation (Axelrod, 1997), and organizational change (Axelrod & Cohen, 1999). Learning, too, is complex—it involves multiple agents interacting with artifacts and peers—so research on learning may also benefit from ABM. To demonstrate the utility of agent-based models as "thinking tools" for psychology research, we have designed and implemented in NetLogo the "I'm Game!" simulation, in which agents, computer-based entities, "learn" through playing either as individuals, as social interactors, or both. The NetLogo environment (Wilensky, 1999a) was designed with the explicit objective of providing a "low threshold, no ceiling" for students and researchers alike to engage in agent-based modeling (Wilensky, 1999b; Tisue & Wilensky, 2004). The vision is that building simulations will become a common practice of science and social-science scholars investigating complex phenomena—the scholars themselves, and not hired programmers, build, run, and interpret the simulations. To these ends, the NetLogo "language" has been developed so as to be accessible—easy to write, read, and modify—making NetLogo very much distinct from common general-purpose languages such as Java and C++.
In order to introduce ABM further, we will examine, in later sections, the NetLogo code that underlies the "I'm Game!" simulation and expresses the theoretical assumptions of the modeler (on how social entities are expected to behave under particular conditions). The entire NetLogo code for the "I'm Game!" model occupies about five pages of sparse programming "text," or a total of just over 820 "words" and symbols (under 4000 characters), of which 455 are the core code; the rest is auxiliary to the simulation itself—various setup, monitoring, and graphing procedures (see Appendix A). Thus, a caveat is perhaps due at this point for readers with little or no experience with computer-based simulations who are concerned with the brevity of the code vis-à-vis their bookshelves full of relevant literature. First, the model does not purport to contain interpretations of behaviors but only to support an inquiry into these behaviors. Second, it is the nature of NetLogo-type code that complicated syntactical structures can be condensed into short "sentences." Third, the nested structure of computer procedures, combined with a defined parameter space and a randomness generator, allows a limited number of specifications to generate a panoply of individual scenarios (we can "collapse" the combinatorial space of possible events). That is, the dynamics and randomness capacity of the simulations are specifically geared to allow the narratives of various case studies to emerge through running the simulation with a "cast" of numerous agents operating in parallel. In other words, enfolded within the condensed code is the potentiality of literally endless "cases," a potentiality that is borne out when the program is run and the data are automatically collected and measured.
In summary, ABM shares with classical experimental designs the logic that particular aspects of a studied phenomenon are foregrounded for observation, measurement, data collection, and analysis. However, ABM is distinct from classical designs in that the experiment is simulated—it does not actually happen "out there"—not even in a classical laboratory. In this sense, ABM is more like a thought experiment, a computer-enhanced Gedankenexperiment: it is a means of exploring the eventualities of beginning from a set of well-defined simple rules and allowing the experimental "subjects" to act out these rules with a measure of randomness, possibly giving rise to surprising group patterns. Ultimately, results of these experiments must be interpreted with caution. Even if group patterns emerge that appear convincingly similar to behaviors "in the world," one should maintain a healthy skepticism, in case critical factors were not included in the model. But, then again, such tenuousness is a feature of any post-Popperian scientific research.

Learning Through Modeling

Modeling is not only for professional scientists. Researchers of learning concerned with the design of learning environments are generally unanimous in pointing to the efficacy of building models as a form of learning (Hmelo-Silver & Pfeffer, 2004; Lesh & Doerr, 2003; Wilensky, 1999b; Wilensky & Reisman, 2004). A process of modeling may consist of foregrounding elements of phenomena under inquiry, including these elements' properties and behaviors, and accounting for these phenomena, e.g., by implementing explanatory mechanisms of causality. Through modeling, students (in the broad sense of 'learners' of all ages, walks of life, and expertise) concretize and articulate their understanding of content in the form of some residual artifact, such as a diagram, a narrative, an academic paper, or a computer procedure.
Students can then display these artifacts in a public forum, explain their intended interpretation of these artifacts (i.e., what they are modeling), and negotiate with their peers both the adequacy of the model (that it is modeling what it purports to model) and their understanding of the phenomena as expressed in the model. Students discuss their understanding and modify it based on their own insight as well as on their peers' responses. One form of modeling that has been receiving increased attention is computer-based modeling. In computational environments that provide feedback to the user, students' conceptual models and virtual models are mutually informative. That is, students' reasoning and their computer procedures develop reciprocally: students "debug their own thinking"—that is, detect and correct their own misconceptions—through debugging the program procedures (Papert, 1980). For instance, students who attempt to cause a computer-based agent to bounce across the interface realize that a "go up, go left, go down" procedure does not result in bouncing as they know it "in the world." Through this breakdown experience, bouncing announces itself afresh, to paraphrase Heidegger. Specifically, students become aware of the arc motion of the desired ballistic trajectory and may thus be stimulated to formulate an improved procedure that results in the desired arc.

Footnote 1: "A good scientific model....should have a certain 'open texture', giving scope for further exploration: it should not merely seek to summarise what is known. Then it can be a research tool as well as a teaching aid. The model itself can be studied for its properties and this may suggest new possibilities (or impossibilities) and raise new questions.... Of course you should never believe models, they only have an 'as if' status and they are likely to let you down any time" (Cairns-Smith, 1996, pp. 46-7).
Thus, through the very act of implementing within a computational environment their model of a phenomenon, learners have opportunities to develop more sophisticated understanding of the phenomenon. Agent-based modeling is particularly suitable for learning complex phenomena (Wilensky, 1999b, 2002). One reason complex phenomena are difficult to understand is that they can be perceived from two or more levels of coherency. For instance, a flock of birds flying in an arrow-shaped formation can be just that—a flock (the “macro level”)—or it could be regarded as a collection of individual birds each following some rules (the “micro level”). Both perspectives on the flock, the micro and the macro, are necessary to understand the flocking phenomenon, because it is through the rule-based interactions of the individual birds that the flock form emerges as a recognizable entity in the world. Therefore, a computer-based environment for modeling a flock and simulating its emergence requires tools for programming individual agents, e.g., the birds, parallel processing capabilities for the birds to operate simultaneously and interact, and functions for quantifying aggregate (macro) properties, such as the total number of birds or their mean orientation. A group of learners is a complex phenomenon, too, and so we have used the NetLogo multi-agent modeling-and-simulation environment to create the model that accompanies this paper. How do flocks relate to students? In creating the “I’m Game!” model, we sought to understand learning as an individual process that occurs within a social context, in which individuals both contribute to and, in turn, are influenced by this context of fellow learners. At the same time, we were interested in measuring the impact of rules we assign to individual students on the learning of the group. That is, we sought to examine any emergent group-level phenomena that may result from interactions between individual learners. 
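As a minimal illustration of the kind of aggregate (macro) function mentioned above, the following Python sketch (our own, not NetLogo code) computes the mean orientation of a set of agents. Because headings are angles, a naive arithmetic mean misleads; the headings are averaged as unit vectors instead.

```python
import math

def mean_orientation(headings_deg):
    """Circular mean of agent headings (degrees), computed by averaging
    the headings as unit vectors and taking the angle of the resultant."""
    x = sum(math.cos(math.radians(h)) for h in headings_deg)
    y = sum(math.sin(math.radians(h)) for h in headings_deg)
    return math.degrees(math.atan2(y, x)) % 360

# Three birds heading 350, 0, and 10 degrees have a mean orientation of
# (approximately) 0 degrees, whereas a naive arithmetic mean would give 120.
```

In an agent-based environment such macro-level quantities are simply functions computed over the micro-level states of the individual agents.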
In summary, modeling for learning affords much more than capturing a post facto product of a completed inquiry process. On the contrary, building the model is intrinsically intertwined with the learning process itself. The model is "an object to think with" (Papert, 1980). Computer-based modeling environments, in particular, bear constraints that scaffold practices with strong parallels to theoretical modeling: specificity and consistency in terminology; rigor in articulating relations between elements (rules, topology, temporality); definition of a parameter space informed by implicit hypotheses; and coherence, which is tied to the esthetics and elegance of the code and the interface (a latter-day Occam's Razor). Also, computation-based simulations can stimulate scholarly discourse by facilitating a sharing of methodology, experimentation, and data at a level of explicitness that enables thorough scrutiny and precise replicability (Wilensky, 2000; Tisue & Wilensky, 2004). We have discussed researchers' increasing use of ABM for revealing patterns in complex social phenomena, and we have discussed the role of modeling in learning. In the remainder of the paper, we examine a case study of modeling a complex psychological phenomenon, 'learning,' and discuss results from operating this simulation. In particular, we will focus on how building the "I'm Game!" model of 'learning through playing' transposed the negotiation between two theoretical camps to a shared campground where, rather than translating between camp-specific terminology, we adopted the computer-based lingua franca of NetLogo code to express the theoretical models of both camps.

Footnote 2: By "group learning" we are not referring to the distributed-cognition literature (e.g., Hutchins, 1995) but to indices that capture aggregate aspects of a collection of individual learners. In building our model, we did not intend to feature intentional collaboration between agents with a common objective.
Agent-Based Modeling: A Case of Modeling 'Learning Through Playing'

The "I'm Game!" model of 'learning through playing' was created with the following objectives: (1) to demonstrate the viability of agent-based modeling (ABM) for examining socio/developmental-psychological phenomena; (2) to illustrate the potential of ABM as a platform enabling discourse and collaboration between psychologists with different theoretical commitments; and, specifically, (3) to visualize the complementarity of Piagetian and Vygotskiian explanations of how people learn. We strove to create a simulation that would be simple enough to carve the theoretical models at the joints yet would still allow a meaningful distinction between them. This reductive exercise proved useful as a means toward implementing the theoretical models in the NetLogo code, in the form of simple and concise procedures.

To model human learning in social contexts, we chose the context of a group of children at play, because both Piaget (1962) and Vygotsky (1962) studied relations between playing and cognitive development. Also, games easily lend themselves to proceduralization in computer code as a set of conventional rules that agents follow. That is, we could have chosen the activity context of "Making Pasta" (recipe as rule-based), "Buying a Toy" (some algorithm that considers affective and financial input), or even "Making Music Together" (modifying pitch and tempo based on ongoing feedback from peers). But a context in which a group of participants engages in an explicitly ludic activity enables us, the modelers, to define unequivocally and, hopefully, incontestably, the specific objectives and underlying skills that participants need to develop in order to engage successfully in the organized activity. That is, by choosing a context in which skills are commonly discussed in terms of procedures (e.g., "If you see that someone is trying to steal a base, then you should...," etc.), we eschew debating the potentially controversial stance that any human activity is given to definition as a set of procedures that can be implemented as computer code.

Footnote 3: To run this experiment yourself, go to http://ccl.northwestern.edu/research/conferences/JPS2005/jps2005.html

Footnote 4: Piaget saw games as "ludic activity of the socialized being" (1962, p. 142)—assimilation-based forms of practicing the execution of schemas (see also Huizinga, 1955). "There are practice games, symbolic games, and games with rules, while constructional games constitute the transition from all three to adapted behaviors" (1962, p. 110). Rule-based games, specifically, result from "collective organization of ludic activities" (1962, p. 113). In the main, Piaget studied the function of play vis-à-vis his stages of cognitive development. Vygotsky, too, saw games as crucial for development—although occupying a limited part of the child's time, play is "the highest level of pre-school development" (1978, p. 102). For instance, Vygotsky regarded "make-believe play as a major contributor to the development of written language—a system of second-order symbolism" (1978, p. 111). However, Vygotsky interpreted humans' urge to play within a broader psycho-social web of practices, for instance as a substitute for unfulfilled or unrealizable desires. Also, he stresses the pervasiveness of imagination and rules in play: in early play, tacit rules, e.g., "sisterhood," are made explicit; and, "Just as the imaginary situation has to contain rules of behavior, so every game with rules contains an imaginary situation" (1978, p. 75), even chess. He summarizes that, "The development from games with an overt imaginary situation and covert rules to games with overt rules and a covert imaginary situation outlines the evolution of children's play" (1978, p. 96). The motivations for the gradual prominence of progressively rigid rules are interest (simply running becomes boring) and a social caveat to distinguish more clearly between work and play, coupled with the regulatory assimilation of a rule-based social ecology.

Finally, we chose a game involving marbles as homage to Piaget for his studies in these contexts (e.g., Piaget, 1971). The design problem for this model-based thought experiment was to create a single environment in which we could simulate both "Piagetian" and "Vygotskiian" learning. In the interest of foregrounding the theoretical differences between these two perspectives, we focused on the differential emphases they put on the contribution of the social milieu to individual learning. Thus, we wanted to visualize and measure incremental improvement in learners' performance in the marbles game under three conditions: (a) "Piagetian"—players interact with objects in their environment, accommodating their schemas based on feedback; (b) "Vygotskiian"—players learn through imitation by participating in an organized activity that includes others who are more experienced in performing a certain target skill; and (c) "Piagetian–Vygotskiian"—learners learn both from their own performance and from the performance of others. Clearly, such typifications of "Piagetian" and "Vygotskiian" learning are gross caricatures of these theoretical models. Yet the simple agent rules stemming from these typifications may be sufficient to generate data revealing interesting behavioral patterns at the group level. One could always build further from these rules by complexifying the agents' rules as well as the contexts of the simulated "world." It is possible to simulate two different types of behavior—"Piagetian" or "Vygotskiian"—within the same model.

Footnote 5: Piaget reports on studies of children's understanding of and attitude towards the rules of the game of marbles. Our simulations focus not on learning rules but on learning to perform better in the game.
This is achieved by defining different sets of rules for the agents and running the simulation under one of these conditions or the other. In fact, as we will detail in a later section, it is possible to simulate a combination of these two behaviors—"Piagetian and Vygotskiian"—by creating an option in which both types of behaviors are active and some selection procedure then governs a resolution as to which output shapes the subsequent behavior of the agents. For instance, the selection could be utilitarian—the output is selected on the basis of its goodness, that is, the degree to which it helps the agent achieve a specified objective.

Figure 1. Snapshots from successive experimental runs of the NetLogo "I'm Game!" simulation of play-based "Piagetian" and/or "Vygotskiian" learning: Personal and interpersonal learning are reciprocal.

In the interest of enhancing the "legibility" of the simulation's computer interface, we sought a marbles game in which a learner's skill can be visually indexed even without resorting to the quantitative metrics that usually appear in interface display windows in the form of numerical values. We eventually chose to model a game (see Figure 1, above) in which contestants all stand behind a line and each throws a marble, trying to land it as close as possible to a target line some "yards" away (30 NetLogo "units" away). This game, we find, enables one to evaluate at a glance how well players are doing, both as individuals (distance to the target line) and as a group (mean distance to the line, or density). We will now explain the game in more detail, focusing on how we implemented, measured, and displayed Piagetian and Vygotskiian learning. We will then turn to some of the core computer procedures underlying the simulation, so as to demonstrate how the NetLogo computational medium enables an implementation of theoretical models (see Appendix A for the entire NetLogo code of this model).
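The skill index just described (distance to the target line) is easy to state precisely. As a minimal sketch, assuming a target line 30 units from the throwing line, a player's score on an attempt might be computed as follows; the names TARGET, attempt_score, and the error_sd noise term are our illustrative inventions, not the model's code:

```python
import random

TARGET = 30  # distance of the target line from the throwing line, in "units"

def attempt_score(throw_distance, error_sd=0.0):
    """Score an attempt as the absolute distance between where the marble
    lands and the target line; lower scores mean more skilled performance."""
    landing = throw_distance + random.gauss(0, error_sd)  # optional throw "noise"
    return abs(TARGET - landing)
```

A throw of exactly 30 units scores 0; overshooting to 37 or undershooting to 23 both score 7. A group-level index is then simply the mean of these individual scores.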
Rules of the "I'm Game!" Agent-Based Model of Learning in Social Contexts

Players stand in a row (see Figure 1a, above). They each roll a marble at a target line. Some players undershoot the line, some overshoot it (see Figure 1b). Players collect their marbles, adjust the force of their roll, and, on a subsequent trial (Figure 1c), improve on their first trial—they have "learned" as individuals. Gradually, the group converges on the target line (see Figures 1d, 1e, and 1f, for three later attempts). Figure 2, below, shows the entire interface of the "I'm Game!" interactive simulation, including all controls and displays. We simulated four learning strategies: (a) "Random"—a control condition, in which players' achievement does not inform their performance on subsequent attempts, unless they are exactly on target; (b) "Piagetian"—players learn only from their own past attempts; (c) "Vygotskiian"—players learn only by watching other players nearby, not from their own attempts; and (d) "Piagetian–Vygotskiian"—players learn from both their own and a neighbor's performance. In Figure 2 we see results from having run the simulation under the four conditions and over about eighty experimental runs, each consisting of thirty attempts. The graph in the center, "Average Distance," shows the group mean distance from the target line by strategy, and the histogram on the right, "Strategy Averages," records the group's cumulative mean performance by strategy. (Note that attempts are measured by the absolute distance of a marble from the target line, so in both the graph and the histogram, lower scores are associated with better, or more skilled, performance.)

Footnote 6: When we drop the scare quotes from "Piagetian" and "Vygotskiian," we still bear in mind that the model is a caricature of the theories.

Figure 2. Interface of the NetLogo "I'm Game!" interactive simulation of learning.
Under these particular experimental conditions, and aggregated over repeated runs, the group mean performance (distance from target) is ranked "Piagetian–Vygotskiian," "Piagetian," "Vygotskiian," and "Random." To make the learning process more realistic, we implemented a random-error parameter that introduces "noise" into the learning process (in Figure 2, above, it is set to 12, in the slider that is third from the bottom, on the left side of the interface). Also, we implemented the "#-Vygotskiian-neighbors" parameter (in Figure 2, above, it is set to 10, in the second-from-bottom slider, on the left), which controls the number of neighbors each player observes in the Vygotskiian strategy (the player's subgroup). Finally, we incorporated a "ZPD" (zone of proximal development) slider (see the center of the button-and-slider section). This variable (here, set to 10) limits the range of performances that players can imitate. For instance, a player can imitate a neighbor only if the neighbor's score was better by 10 units or less, as compared to the player's. So (see Figure 2, above, the picture with the "marbles"), the bottom player cannot imitate the player three rows above it, because the difference between their respective performances is larger than the current value of ZPD, but it can imitate the players one and two rows above it, who are well within its ZPD. Note that the learning process involves "feedback loops." That is, a player's learning—the individual "output" of a single attempt—constitutes "input" for the subsequent attempt. In the "Piagetian" condition, this is a player-specific internal loop, whereas in the "Vygotskiian" condition one person's output may be another person's input on a subsequent attempt, and so on. Note also that over the course of a "Piagetian–Vygotskiian" run of the simulation, players might learn on one attempt from their own performance and on another from the performance of a neighbor.
Both sources of information are simultaneously available for each player to act upon.

Computational Procedures That Implement a Theoretical Model

An introduction to ABM would not be complete without a glimpse into the code that underlies the agents' rule-based behaviors. The semantics and syntax of the NetLogo programming language were developed with particular attention to users' expectations, coming from natural language (see Tisue & Wilensky, 2004). To the extent possible, code "primitives" (the basic commands that the program "knows") are simple nouns, verbs, quantifiers, and prepositions, such as "turtle," "breed," "forward," "back," "jump," "count," "beep," "any?," "nobody," "-from," "-at," or "clear-graphics." Users "teach" the program new words (for actions and variables) for the particular models they are building. For example, one variable in the "I'm Game!" model is "best-score" (and so are "score," "moves," and "best-max-moves"; see the code example below). Once a variable is introduced, it can take on and continuously change values. In this model, "best-score" is a variable that each of the players updates after each of its attempts to land the marble on the target line—it is a measure of that player's best score so far. We will now look more closely at examples of the NetLogo code. The purpose of the following section is not so much to teach the reader NetLogo as to convey that intuitive, yet not spelled out, ideas regarding phenomena under inquiry become explicit when one must articulate them in simple lines of code.

Diving into NetLogo code. Following is an example of the computer procedure "p-adjust." This procedure name is an abbreviation of "Piaget-adjust." It is a set of rules that instruct each virtual player how to operate under the "Piagetian" condition when that condition is activated from the model's interface and a marble has already settled.
There are also special procedures for the Vygotskiian condition (v-adjust), the Piagetian–Vygotskiian condition (pv-adjust), and the random condition (r-adjust). (See, below, how the procedure begins with the infinitive form "to p-adjust" and ends with "end"; note that comments to the right of the double semicolons are ignored by the computer when it compiles, or "reads," the code.)

to p-adjust
  ;; if your score is better, that's your new best; otherwise stick with the old
  if (score < best-score)  ;; note that lower scores are better (closer to target line)
    [ set best-score score
      set best-max-moves max-moves ]
end

The agent "reading" this procedure does the following. It evaluates its score (how well it just performed in terms of its distance from the target line) and compares it to its best-score (the smallest distance it has personally achieved so far). If the score is less than the best-score, the player executes the following two actions that are enclosed between the brackets: (1) set best-score score: the player assigns to its best-score variable the value of the current score; and (2) set best-max-moves max-moves: the player assigns to its best-max-moves, which records how far the player threw its marble when it previously achieved its best-score, the value of max-moves, which records how far the player threw its marble on this current attempt. Note that in the case that the IF conditional is not satisfied, that is, when it is not the case that score < best-score, the player simply ignores the clause between the brackets. That means that the player does not change the values of its personal variables best-score and best-max-moves, retaining them for the following attempt, whereupon it will again make the same comparison and judgment. What the p-adjust procedure essentially means is that the player is oblivious to all other players. It is only looking at its own achievement and possibly updating its "memory" accordingly.
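For readers more at home in a general-purpose language, the same Piagetian rule can be sketched in Python; the Player class is our illustrative stand-in for a NetLogo turtle, not the model's actual code:

```python
class Player:
    def __init__(self):
        self.best_score = float("inf")  # lower is better; start "infinitely far"
        self.best_max_moves = 0         # throw force that produced best_score
        self.score = None               # distance from the target on this attempt
        self.max_moves = None           # throw force on this attempt

    def p_adjust(self):
        """Piagetian rule: consult only your own attempt. If the current score
        beats your personal best, remember it and the force that produced it."""
        if self.score < self.best_score:
            self.best_score = self.score
            self.best_max_moves = self.max_moves
```

Note that, exactly as in the NetLogo procedure, a worse attempt leaves both memory variables untouched.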
Under the Vygotskiian condition, however, the player looks around and selects one of the other players in its vicinity. For the Vygotskiian and the Piagetian–Vygotskiian conditions, we implemented Vygotsky's construct of the zone of proximal development (ZPD). If a player's achievement on a given attempt is, say, 10 units away from the target, and the player sees a neighbor getting the marble only 3 units away from the target, the first player might imitate that neighbor's performance, as long as the ZPD is at least 7 units wide. That is, if the ZPD is set at 5 units, then that neighbor's performance will be out of that player's ZPD (because 7 is larger than 5) and, so, the player cannot imitate it. In summary, if the selected neighbor player did better on this trial and this superior performance is within the player's ZPD, the first player adopts both the best-score and the best-max-moves of that neighbor. Otherwise, the player records its personal values from the current run. Under the Piagetian–Vygotskiian condition, the player first checks its current score and, if it is the best so far, updates its values. The player then selects a neighbor and takes on that neighbor's values only if the neighbor did better than itself and that score is within the player's ZPD. Finally, under the Random condition, the player can only record its own current values, and only if they are completely on target. Note that when the ZPD slider (see Figure 2, p. 14) is set at the value "0," a player cannot learn at all from its neighbors, because any performance that is even just 1 unit superior to its own is beyond the player's learning scope. Conversely, when the ZPD slider is set at "60" (the maximum possible distance from the target line), the ZPD does not constrain the prospects of imitating the performance of neighbors at all. In between, values larger than "0" enable the imitation of an increasingly wide range of neighbor performances.
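The imitation rule just described reduces to two comparisons. The following Python function is our own hedged sketch (the names v_adjust, my_best, neighbor_best, and zpd are illustrative, not the model's identifiers); it returns the score the player records as its new best:

```python
def v_adjust(my_best, neighbor_best, zpd):
    """Decide whether to imitate a neighbor under the "Vygotskiian" rule.

    Scores measure distance from the target line, so lower is better.
    Imitation happens only when the neighbor is strictly better AND the
    advantage (my_best - neighbor_best) does not exceed the ZPD.
    """
    better = neighbor_best < my_best
    within_zone = (my_best - neighbor_best) <= zpd
    if better and within_zone:
        return neighbor_best  # imitate the neighbor's performance
    return my_best            # otherwise keep one's own value

# The worked example from the text: a player at 10 units sees a
# neighbor at 3 units, a 7-unit gap.
```

With a ZPD of 7 or more, the 7-unit gap is within the zone and the player imitates; with a ZPD of 5, the neighbor is out of reach and the player keeps its own value.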
That is, we expect that under the Vygotskiian condition, individual learning, and therefore group learning, will increase as a function of the ZPD. The notion that the ZPD can be manipulated may be confusing unless the magnitude of the ZPD is interpreted in proportion to the task: a small ZPD indicates that participants have much learning to do in order to accomplish the task; a very large ZPD indicates that even the lowest-performing participants do not have far to go. Table 1, below, summarizes the four conditions, displaying both the NetLogo procedure code and its explanation.

Table 1. NetLogo Agent Procedures for the Experimental Conditions and Explanations of These Procedures

"Piagetian"

to p-adjust
  if (score < best-score)
    [ set best-score score
      set best-max-moves max-moves ]
end

Explanation: If you have performed better than in the past, record both how well you did now and how far you threw the marble.

"Vygotskiian"

to v-adjust
  let fellow nobody
  while [ (fellow = nobody) or (fellow = self) ]
    [ set fellow turtle (who - (#-Vygotskiian-neighbors / 2)
                             + random (1 + #-Vygotskiian-neighbors)) ]
  ifelse (best-score > best-score-of fellow)
     and (best-score - ZPD <= best-score-of fellow)
    [ set best-score best-score-of fellow
      set best-max-moves best-max-moves-of fellow ]
    [ set best-score score
      set best-max-moves max-moves ]
end

Explanation: If a selected neighbor performed better than you and this advantage is within your ZPD, record both how well the neighbor did and how far the neighbor threw the marble. Otherwise, record your own values, regardless of how they compare to your own previous attempts.
"Piagetian–Vygotskiian"

to pv-adjust
  let fellow nobody
  while [ (fellow = nobody) or (fellow = self) ]
    [ set fellow turtle (who - (#-Vygotskiian-neighbors / 2)
                             + random (1 + #-Vygotskiian-neighbors)) ]
  if ( score < best-score )
    [ set best-score score
      set best-max-moves max-moves ]
  if ( best-score > best-score-of fellow )
    [ set best-score best-score-of fellow
      set best-max-moves best-max-moves-of fellow ]
end

Explanation: If you have performed better than in the past, record both how well you did now and how far you threw the marble. Then, if a selected neighbor performed better than you and this advantage is within your ZPD, record both how well the neighbor did and how far the neighbor threw the marble.

"Random"

to r-adjust
  if ( (abs pxcor) = 0 )
    [ set best-max-moves ( max-dist / 2 ) - 1 ]
end

Explanation: If you have performed perfectly, record how far you threw the marble. Otherwise, don't record anything.

Finally, we will focus on a line of code within the "to adjust" procedure (see Appendix A), in which players are "preparing" to throw the marble. That is, they are each assigning to their personal variable, max-moves, a value that is based both on their prior learning, best-max-moves, and on a random element, the random-normal clause at the end of the line. There are variations on this code for the Piagetian and the Vygotskiian conditions, which reflect differences between these conditions, but the core code is as follows:

set max-moves ( best-max-moves + random-normal 0 ( error * best-score / max-dist ) )

The code primitive random-normal takes two arguments, a mean and a standard deviation, and reports a value based on those arguments. In the random-normal clause, above, the mean is 0 and the standard deviation is a function of three variables: (1) error, which the user sets on the interface (see Figure 2, on p.
14); (2) best-score, which the player took from its last attempt (see, in this section above, the various conditions for computing this value); and (3) max-dist, which is a "global variable" (not personalized) and simply denotes the total horizontal length of the screen (how far a marble can go before it bounces back off the wall). Note that the standard-deviation argument takes two global variables, error and max-dist, and one player-specific variable, best-score. Recall that as the player improves, it achieves lower best-score values (the marble stops nearer to the target line). So the quotient best-score / max-dist decreases with the player's improvement, and thus the standard deviation, too, decreases. So what is this line of code modeling, and why did we choose to model this? Each player has its own "personalized" distribution of anticipated performance that is based on its prior performance. As the player improves, it calibrates its performance, so the better the player, the finer the calibration. In other words, the error is normally distributed around the intended target, but that distribution of error shrinks as the player approaches perfect performance. [Footnote: For the purpose of this explanation, we have simplified the original code by collapsing two subconditions into a single condition and deleting an auxiliary procedure.] Think of a professional basketball player as compared to an amateur. From any given range, the professional will, on average, get the ball closer to the basket than the amateur. But, also, the professional's errors around the basket will be smaller than those of the amateur, who will be throwing the ball "all over the place." Finally, note that all the above explained the procedures within a single run. To produce the graphs in Figure 2 (see p.
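The shrinking-error mechanism can be sketched numerically. The following Python function is our own illustration of the idea (the parameter names mirror, but are not, the NetLogo variables); it uses random.gauss in place of NetLogo's random-normal:

```python
import random

def throw_distance(best_max_moves, best_score, error, max_dist):
    """Prepare the next throw: the remembered best throw plus normally
    distributed noise whose standard deviation shrinks as best_score
    (the distance from the target) approaches 0.
    """
    sd = error * best_score / max_dist
    return best_max_moves + random.gauss(0, sd)

# A "professional" (small best_score) scatters far less than an
# "amateur" (large best_score) at the same error setting.
```

At best_score = 0 the noise vanishes entirely, so a "perfect" player repeats its best throw exactly; a player far from the target scatters widely, like the amateur throwing the ball "all over the place."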
14), the procedures were run over and over, with each procedure outputting values that are recorded in global lists (independent of the agents) and plotted onto the graphs and, periodically, onto the histogram (in the "Average Distance" plot, for example, each squiggly graph line records runs of 30 group attempts each). So the graph and histogram depict the aggregation of results from multiple runs under each of the four experimental conditions, making comparisons statistically meaningful.

Summary. We have now completed explaining the core procedures of the "I'm Game!" model, both in terms of the visible behaviors on the simulation interface and in terms of the code that underlies these rule-based behaviors. We have attempted, to the extent possible within the scope of this paper, to "glass-box" the model. That is, we have tried to present and explain the core procedures so as to afford the reader scrutiny of the model. The reader can now, hopefully, examine the code so as to judge its reliability (that it causes the agents to behave as we claim they behave) and its validity (that the agents' behavior reflects key aspects of the real behaviors "out there," at least according to the theoretical perspectives that we purport to be modeling). We will now present results from running the experiment, discuss several emergent behaviors observed when operating the model, and end with concluding remarks.

Results

In earlier sections of this paper, we delineated our motivation to build a computer-supported model of a socio-psychological phenomenon. Specifically, we discussed the advantages of agent-based modeling (ABM) for studying complex phenomena, such as learning in situated contexts. Moreover, we argued that the process of building the model in and of itself constitutes an important part of the learning that the model ultimately affords the researcher ("learning through modeling").
We then introduced the "I'm Game!" model of learning in situated contexts that we built in NetLogo and explained key elements of the programming code underlying the simulation. We now present results from running the model, including our insights from analyzing data from multiple runs under a variety of parameter settings.

Experimental Results

The "I'm Game!" model includes several parameters, such as the size of the execution error, the number of neighbors the player selects from, and the size of the ZPD. Therefore, the space of possible parameter-setting combinations for running this model is vast. Nevertheless, in this section we focus on results from running the model under a specified range of settings, in each of which we gradually changed only one independent variable. Each specific parameter combination was run several times, and the values reported and displayed in the graphs represent the mean values across these iterations. The experimental results are as follows.

Main effect and control. The different strategies had main effects: they resulted in consistency within learning conditions and differences between them. The "Random" baseline condition consistently produced the weakest learning. Also, learning can occur under purely "Piagetian" or purely "Vygotskiian" conditions, but learning under the combined condition exceeds learning gained under each strategy alone. Finally, whether the Piagetian learning is greater than the Vygotskiian learning or vice versa depends on combinations of the settings of the parameters "Vygotskiian-neighbors," "ZPD," and "error."

Error and learning. Under all three study conditions, we observe a strong direct relation between the execution error and learning (that is, an inverse relation between the error and the score; see Figure 3, below). This relation is stronger under the Piagetian condition than under the Vygotskiian condition.
This Piagetian advantage suggests the utility of exploration when learners must rely on personal resources only. When the error is low, Vygotskiian learners are liable to arrive at a group deadlock: all consistently undershooting or all consistently overshooting, as though they are converging on the target asymptotically yet cannot improve. Such deadlock is due to players' performance being bounded above by the performance of the best player in the group (and recall that the standard deviation of a throw is proportional to the score, so a small error setting near the target line disallows radical change).

[Figure 3 chart: "Score by Error Under Four Conditions." x-axis: Size of Error (in Perception/Performance), 1-31; y-axis: Score (Distance From Target Line); series: Random, Piagetian, Vygotskiian, P-V.]

Figure 3. Increasing players' perception/execution "error" allows players to experience improved performance. The effect is more pronounced for the Piagetian (and hence the Piagetian–Vygotskiian) condition than for the Vygotskiian condition. In these experimental runs, there were 20 agents, the ZPD was set at 15, and the #-Vygotskiian-neighbors variable was at 4 (and 10 iterations were run per setting).

ZPD and learning. In the Vygotskiian mode, the larger the ZPD, the better the group-average learning (see, in Figure 4, below, the diagonal "Vygotskiian" graph line representing the inverse relation between the ZPD value and the mean distance from the target). That is, the larger the ZPD, the more players could bootstrap their performance by imitating better performers in the group. Moreover, at a ZPD of 60, even the lowest-performing players had cognitive access to the performance of the highest-achieving players.
This interpretation is supported by the observation that under extremely small ZPD settings, and especially when this is compounded with low "error" settings, some players are "left behind": they initially perform poorly and subsequently never find players within their narrow zone upon whom to model their performance. Note the consistent Piagetian performance.

[Figure 4 chart: "Score by ZPD Under Four Conditions." x-axis: Size of ZPD, 1-61; y-axis: Score (Distance From Target Line); series: Random, Piagetian, Vygotskiian, P-V.]

Figure 4. Broadening the ZPD setting increases learning opportunities and, thus, decreases the group mean score in the Vygotskiian and Piagetian–Vygotskiian experimental modes but does not affect the Random or Piagetian modes. In these experimental runs, there were 20 agents, the error was at 4, and '#-Vygotskiian-neighbors' was at 4 (and 30 iterations were run per setting).

Subgroup size and learning. In the Vygotskiian mode, and collapsed over the ZPD settings, the more neighbors each player observes, the faster the group learns (see Figure 5, below), because each player has a higher chance of imitating another player who is better than itself. Note that, by contrast, the Piagetian condition is designed so as not to be affected by the number of neighbors available for imitation.

[Figure 5 chart: "Score by Sub-Group Size Under Four Conditions."]
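The qualitative contrasts reported above can be probed with a toy re-implementation of the game. The sketch below is emphatically not the authors' NetLogo code: the function names, the synchronous update loop, and all default parameter values are our own simplifications (e.g., we place the target at the midline of a 60-unit course and initialize every agent at the worst possible score). It reproduces only the baseline contrast that the learning conditions beat the Random condition:

```python
import random

def simulate(condition, n_agents=20, rounds=30, error=10.0, zpd=15,
             n_neighbors=4, max_dist=60.0, seed=0):
    """Toy run of the marble game: score = |landing point - target|,
    lower is better. Returns the group's mean best score after `rounds`.
    condition is one of "piaget", "vygotsky", "both", "random".
    """
    rng = random.Random(seed)
    target = max_dist / 2
    # each agent remembers (best_score, best_throw); start at the worst
    # possible score with a random throw
    best = [(max_dist, rng.uniform(0, max_dist)) for _ in range(n_agents)]

    def neighbor_of(i):
        # pick a fellow from the agent's vicinity, never itself
        k = i
        while k == i:
            k = (i - n_neighbors // 2 + rng.randrange(n_neighbors + 1)) % n_agents
        return k

    for _ in range(rounds):
        # throws scatter around the remembered best, with noise that
        # shrinks as the remembered best score approaches the target
        throws = [bt + rng.gauss(0, error * bs / max_dist) for bs, bt in best]
        scores = [abs(t - target) for t in throws]
        new_best = []
        for i, (bs, bt) in enumerate(best):
            if condition in ("piaget", "both") and scores[i] < bs:
                bs, bt = scores[i], throws[i]        # personal update
            if condition in ("vygotsky", "both"):
                nbs, nbt = best[neighbor_of(i)]
                if nbs < bs and bs - nbs <= zpd:
                    bs, bt = nbs, nbt                # imitate within ZPD
                elif condition == "vygotsky":
                    bs, bt = scores[i], throws[i]    # record own values
            if condition == "random" and scores[i] == 0:
                bs, bt = scores[i], throws[i]        # only perfect throws count
            new_best.append((bs, bt))
        best = new_best
    return sum(bs for bs, _ in best) / n_agents
```

Under these defaults, the Random baseline almost never registers a perfect throw and so preserves its initial worst-case mean, whereas the Piagetian and combined rules improve steadily; as in the full model, the ordering between the Piagetian and Vygotskiian rules themselves depends on the error, ZPD, and neighborhood settings.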